Decorated proofs for computational effects: Exceptions
We define a proof system for exceptions which is close to the syntax for
exceptions, in the sense that the exceptions do not appear explicitly in the
type of any expression. This proof system is sound with respect to the intended
denotational semantics of exceptions. With this inference system we prove
several properties of exceptions.
Comment: 11 pages
A duality between exceptions and states
In this short note we study the semantics of two basic computational effects,
exceptions and states, from a new point of view. In the handling of exceptions
we dissociate the control from the elementary operation which recovers from the
exception. In this way it becomes apparent that there is a duality, in the
categorical sense, between exceptions and states.
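As a toy illustration of this dissociation (our own Python sketch, not the paper's categorical formalism), the elementary recovery operation can be written separately from the control construct that delimits the computation:

```python
class Exc(Exception):
    """A tagged exception carrying a parameter."""
    def __init__(self, tag, arg):
        super().__init__(tag)
        self.tag, self.arg = tag, arg

def throw(tag, arg):
    raise Exc(tag, arg)

def untag(exc, tag, recover):
    """Elementary operation: recover from a matching exception, else re-raise."""
    if exc.tag == tag:
        return recover(exc.arg)
    raise exc

def try_catch(body, tag, recover):
    """Control part: delimit the computation; recovery itself is delegated to untag."""
    try:
        return body()
    except Exc as e:
        return untag(e, tag, recover)
```

Separating `untag` from `try_catch` mirrors the dissociation described above: the control construct only decides where handling happens, while `untag` alone inspects the exception and recovers from it.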
States and exceptions considered as dual effects
In this paper we consider the two major computational effects of states and
exceptions, from the point of view of diagrammatic logics. We get a surprising
result: there exists a symmetry between these two effects, based on the
well-known categorical duality between products and coproducts. More precisely,
the lookup and update operations for states are respectively dual to the throw
and catch operations for exceptions. This symmetry is deeply hidden in the
programming languages; in order to unveil it, we start from the monoidal
equational logic and we add progressively the logical features which are
necessary for dealing with either effect. This approach gives rise to a new
point of view on states and exceptions, which bypasses the problems due to the
non-algebraicity of handling exceptions.
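The duality can be made concrete in a small sketch (hypothetical Python of ours, not the paper's diagrammatic logic): lookup takes nothing and yields a value, update takes a value and yields nothing, while throw takes a value and never returns; reversing the direction of the arrows exchanges the two pairs of operations.

```python
# State effect: two operations on a single memory location.
class Cell:
    def __init__(self, v):
        self._v = v
    def lookup(self):        # lookup : 1 -> V  (takes nothing, yields a value)
        return self._v
    def update(self, v):     # update : V -> 1  (takes a value, yields nothing)
        self._v = v

# Exception effect: the dual pair of operations.
class Exn(Exception):
    def __init__(self, v):
        self.v = v

def throw(v):                # throw : V -> 0  (takes a value, never returns)
    raise Exn(v)

def catch(body, handler):    # catch delimits a computation, dually to update
    try:
        return body()
    except Exn as e:
        return handler(e.v)
```

The signatures in the comments are the point: the symmetry between `1 -> V` / `V -> 1` and `V -> 0` / catch is the product-coproduct duality the abstract refers to.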
Adjunctions for exceptions
An algebraic method is used to study the semantics of exceptions in computer
languages. The exceptions form a computational effect, in the sense that there
is an apparent mismatch between the syntax of exceptions and their intended
semantics. We solve this apparent contradiction by defining a logic for
exceptions with a proof system which is close to their syntax and where their
intended semantics can be seen as a model. This requires a robust framework for
logics and their morphisms, which is provided by categorical tools relying on
adjunctions, fractions and limit sketches.
Comment: In this Version 2, minor improvements are made to Version 1.
A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal
Unveiling meaningful geophysical information from seismic data requires
dealing with both random and structured "noises". As their amplitude may be
greater than signals of interest (primaries), additional prior information is
especially important in performing efficient signal separation. We address here
the problem of multiple reflections, caused by wave-field bouncing between
layers. Since only approximate models of these phenomena are available, we
propose a flexible framework for time-varying adaptive filtering of seismic
signals, using sparse representations, based on inaccurate templates. We recast
the joint estimation of adaptive filters and primaries in a new convex
variational formulation. This approach allows us to incorporate plausible
knowledge about noise statistics, data sparsity and slow filter variation in
parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a
constrained minimization problem that alleviates standard regularization issues
in finding hyperparameters. The approach demonstrates strong performance in
low signal-to-noise-ratio conditions, on both simulated and real field seismic
data.
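The flavor of such a primal-dual proximal scheme can be seen on a much smaller stand-in problem. The sketch below is ours, not the paper's seismic formulation: it applies the Chambolle-Pock primal-dual iteration to 1-D total-variation denoising, min_x 0.5||x - y||^2 + lam*||Dx||_1, where the l1 term plays the role of the sparsity-promoting penalty and D is a finite-difference operator.

```python
def diff(x):
    """Finite-difference operator D."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def diff_t(p, n):
    """Adjoint D^T of the finite-difference operator."""
    out = [0.0] * n
    for i in range(n):
        if i > 0:
            out[i] += p[i - 1]
        if i < n - 1:
            out[i] -= p[i]
    return out

def tv_denoise(y, lam, tau=0.4, sigma=0.4, iters=2000):
    """Chambolle-Pock iteration for min_x 0.5*||x - y||^2 + lam*||Dx||_1."""
    n = len(y)
    x, xbar = list(y), list(y)
    p = [0.0] * (n - 1)
    for _ in range(iters):
        # dual step: prox of (lam*||.||_1)* is projection onto [-lam, lam]
        dxb = diff(xbar)
        p = [min(lam, max(-lam, p[i] + sigma * dxb[i])) for i in range(n - 1)]
        # primal step: prox of the quadratic data-fidelity term
        dtp = diff_t(p, n)
        x_new = [(x[i] - tau * dtp[i] + tau * y[i]) / (1 + tau) for i in range(n)]
        # over-relaxation of the primal variable
        xbar = [2 * x_new[i] - x[i] for i in range(n)]
        x = x_new
    return x
```

The step sizes satisfy tau*sigma*||D||^2 <= 1 (here ||D||^2 <= 4). With lam = 0 the iteration returns y itself; with lam large enough the reconstruction collapses to the mean of y, the expected behavior of total-variation denoising.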
A constraint-based optimization approach for seismic data recovery problems
Random and structured noise both affect seismic data, hiding the reflections
of interest (primaries) that carry meaningful geophysical interpretation. When
the structured noise is composed of multiple reflections, its adaptive
cancellation is obtained through time-varying filtering, compensating
inaccuracies in given approximate templates. The under-determined problem can
then be formulated as a convex optimization one, providing estimates of both
filters and primaries. Within this framework, the criterion to be minimized
mainly consists of two parts: a data fidelity term and hard constraints
modeling a priori information. This formulation may avoid, or at least
facilitate, some parameter determination tasks, usually difficult to perform in
inverse problems. Not only are classical constraints, such as sparsity,
considered here, but also constraints expressed through hyperplanes, onto which
the projection is easy to compute. The latter constraints lead to improved
performance by further constraining the space of geophysically sound solutions.
Comment: International Conference on Acoustics, Speech and Signal Processing
(ICASSP 2014); Special session "Seismic Signal Processing".
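The hyperplane constraints mentioned above admit a closed-form projector, which is what makes them attractive inside a projection-based solver. A minimal sketch (ours), projecting onto {z : <a, z> = b} and onto the corresponding half-space:

```python
def project_hyperplane(x, a, b):
    """Euclidean projection of x onto the hyperplane {z : <a, z> = b}."""
    t = (sum(ai * xi for ai, xi in zip(a, x)) - b) / sum(ai * ai for ai in a)
    return [xi - t * ai for ai, xi in zip(a, x)]

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the half-space {z : <a, z> <= b}."""
    if sum(ai * xi for ai, xi in zip(a, x)) <= b:
        return x  # already feasible: the projection is x itself
    return project_hyperplane(x, a, b)
```

Both projections cost one inner product and one scaled subtraction, so enforcing many such constraints per iteration remains cheap.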
Lapped transforms and hidden Markov models for seismic data filtering
Seismic exploration provides information about the ground substructures. Seismic images are generally corrupted by several noise sources. Hence, efficient denoising procedures are required to improve the detection of essential geological information. Wavelet bases provide sparse representations for a wide class of signals and images. This property makes them good candidates for efficient filtering tools, allowing the separation of signal and noise coefficients. Recent works have improved their performance by modelling the intra- and inter-scale coefficient dependencies using hidden Markov models, since image features tend to cluster and persist in the wavelet domain. This work focuses on the use of lapped transforms associated with hidden Markov modelling. Lapped transforms are traditionally viewed as block transforms composed of M pass-band filters. Seismic data present oscillatory patterns, and the oscillatory bases of lapped transforms have demonstrated good performance for seismic data compression. A dyadic-like representation of lapped transform coefficients is possible, allowing wavelet-like modelling of coefficient dependencies. We show that the proposed filtering algorithm often outperforms wavelet-based filtering both objectively (in terms of SNR) and subjectively: lapped transforms better preserve the oscillatory features present in seismic data at low to moderate noise levels.
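The coefficient-modelling step can be caricatured in a few lines. The sketch below is ours and deliberately ignores the intra- and inter-scale tree dependencies the paper models: it keeps only the per-coefficient part of a wavelet-domain hidden Markov model, shrinking each transform coefficient under a two-state ("noise-like" vs "signal-like") Gaussian mixture prior.

```python
import math

def mixture_shrink(coeffs, sigma_noise, sigma_small, sigma_large, p_large=0.1):
    """Shrink transform coefficients under a two-state Gaussian mixture prior:
    each coefficient is 'noise-like' (small variance) or 'signal-like'
    (large variance); output is the posterior-weighted Wiener gain times c."""
    out = []
    for c in coeffs:
        def lik(s2):
            # marginal likelihood of c in a state of signal variance s2
            v = s2 + sigma_noise ** 2
            return math.exp(-c * c / (2 * v)) / math.sqrt(2 * math.pi * v)
        w = p_large * lik(sigma_large ** 2)
        post = w / (w + (1 - p_large) * lik(sigma_small ** 2))
        gain_small = sigma_small ** 2 / (sigma_small ** 2 + sigma_noise ** 2)
        gain_large = sigma_large ** 2 / (sigma_large ** 2 + sigma_noise ** 2)
        out.append(((1 - post) * gain_small + post * gain_large) * c)
    return out
```

Large coefficients are classified as signal and kept nearly intact, while small ones are attenuated, which is the clustering behavior the full tree-structured model exploits across scales.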
Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization
The l1/l2 ratio regularization function has shown good performance for
retrieving sparse signals in a number of recent works, in the context of blind
deconvolution. Indeed, it benefits from a scale-invariance property that is
highly desirable in the blind context. However, the l1/l2 function raises some
difficulties when solving the nonconvex and nonsmooth minimization problems
resulting from the use of such a penalty term in current restoration methods.
In this paper, we propose a new penalty based on a smooth approximation to the
l1/l2 function. In addition, we develop a proximal-based algorithm to solve
variational problems involving this function and we derive theoretical
convergence results. We demonstrate the effectiveness of our method through a
comparison with a recent alternating optimization strategy dealing with the
exact l1/l2 term, on an application to seismic data blind deconvolution.
Comment: 5 pages
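For concreteness, one simple way to smooth the ratio (our own choice, not necessarily the exact penalty used in the paper) replaces each |x_i| by sqrt(x_i^2 + alpha^2) - alpha in the numerator and regularizes the denominator:

```python
import math

def l1_l2(x):
    """Exact (nonsmooth, nonconvex) l1/l2 ratio."""
    return sum(abs(xi) for xi in x) / math.sqrt(sum(xi * xi for xi in x))

def smoothed_l1_l2(x, alpha=1e-3, eta=1e-3):
    """Smooth surrogate for l1/l2: a hyperbolic approximation of |.| over
    a regularized Euclidean norm; recovers l1/l2 as alpha, eta -> 0."""
    l1 = sum(math.sqrt(xi * xi + alpha * alpha) - alpha for xi in x)
    l2 = math.sqrt(sum(xi * xi for xi in x) + eta * eta)
    return l1 / l2
```

The exact ratio is invariant under rescaling of x, which is the scale-invariance property the abstract highlights, and it is smaller for sparser vectors, which is why minimizing it promotes sparsity.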